The Rate of Convergence of Augmented Lagrange Method for a Composite Optimization Problem

Authors

  • LIWEI ZHANG
  • JIHONG ZHANG
  • YULE ZHANG
Abstract

In this paper we analyze the rate of local convergence of the augmented Lagrange method for solving optimization problems with equality constraints whose objective function is the sum of a convex function and a twice continuously differentiable function. The non-smoothness of the convex term in the objective requires advanced tools, such as second-order variational analysis of the Moreau-Yosida regularization of a convex function and matrix techniques for estimating the generalized Hessian of the dual function defined by the augmented Lagrangian. Under two conditions, we prove that the rate of convergence of the augmented Lagrange method is linear, with a ratio constant proportional to 1/c, where c is the penalty parameter, provided that c exceeds a certain positive threshold. As an illustrative example, for the nonlinear semidefinite programming problem we show that the assumptions used by Sun et al. [13] for the rate of convergence of the augmented Lagrange method are sufficient for the two conditions adopted in this paper.
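
To make the scheme concrete, here is a minimal numerical sketch of the augmented Lagrange iteration for a composite problem of the kind described above. Everything in it is an illustrative assumption, not the paper's method: the smooth part f is a quadratic, the convex nonsmooth part g is a weighted l1 norm (whose proximal map is the soft-threshold operator), the equality constraint is taken linear, h(x) = Ax - b, for simplicity (the paper treats general nonlinear equality constraints), and the inner proximal-gradient solver is just one possible choice.

    # Minimal sketch of the augmented Lagrange method for
    #   min_x  f(x) + g(x)   s.t.   h(x) = 0,
    # with illustrative choices f(x) = 0.5 x'Qx, g(x) = gamma*||x||_1,
    # h(x) = Ax - b.  Toy data only; not the paper's algorithm.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 8))   # constraint matrix (toy data)
    b = A @ rng.standard_normal(8)    # feasible right-hand side
    Q = np.eye(8)                     # Hessian of the smooth part f
    gamma = 0.1                       # weight of the l1 term g

    def soft_threshold(v, t):
        # proximal map of t * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def solve_subproblem(x, lam, c, inner_iters=500):
        # Approximately minimize the augmented Lagrangian
        #   L_c(x, lam) = f(x) + g(x) + lam'(Ax - b) + (c/2)||Ax - b||^2
        # by proximal gradient steps; the step size is a crude 1/L bound.
        step = 1.0 / (np.linalg.norm(Q, 2) + c * np.linalg.norm(A, 2) ** 2)
        for _ in range(inner_iters):
            grad = Q @ x + A.T @ (lam + c * (A @ x - b))  # smooth part of L_c
            x = soft_threshold(x - step * grad, step * gamma)
        return x

    x, lam, c = np.zeros(8), np.zeros(3), 10.0
    for k in range(15):
        x = solve_subproblem(x, lam, c)
        lam = lam + c * (A @ x - b)               # multiplier (dual) update
        print(k, np.linalg.norm(A @ x - b))       # feasibility residual

On such a toy instance one would expect the printed feasibility residual to contract by a roughly constant factor per outer iteration, and that factor to shrink as c grows, in line with the 1/c ratio claimed in the abstract.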


Similar Articles

On the non-ergodic convergence rate of an inexact augmented Lagrangian framework for composite convex programming

In this paper, we consider the linearly constrained composite convex optimization problem, whose objective is the sum of a smooth function and a possibly nonsmooth function. We propose an inexact augmented Lagrangian (IAL) framework for solving the problem. The proposed IAL framework requires solving the augmented Lagrangian (AL) subproblem at each iteration less accurately than most of the exist...
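
The excerpt is cut off before the details, but a typical way to make "inexact" precise in AL frameworks of this kind (an assumption drawn from the general IAL literature, not necessarily this paper's criterion) is the following. For \min_x f(x) + g(x) subject to Ax = b, with augmented Lagrangian

    L_\beta(x, \lambda) = f(x) + g(x) + \langle \lambda, Ax - b \rangle + \tfrac{\beta}{2} \|Ax - b\|^2,

each iteration computes x^{k+1} satisfying

    L_\beta(x^{k+1}, \lambda^k) - \min_x L_\beta(x, \lambda^k) \le \varepsilon_k

for a prescribed tolerance sequence \{\varepsilon_k\}, and then updates \lambda^{k+1} = \lambda^k + \beta\,(A x^{k+1} - b).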


Nonlinear rescaling vs. smoothing technique in convex optimization

We introduce an alternative to the smoothing technique approach for constrained optimization. As it turns out, for any given smoothing function there exists a modification with particular properties. We use the modification for Nonlinear Rescaling (NR) of the constraints of a given constrained optimization problem into an equivalent set of constraints. The constraints transformation is scaled by a ...
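
For orientation, the core of the NR construction in Polyak's line of work (recalled here from the general NR literature; this paper's modification may differ) replaces each constraint c_i(x) \ge 0 by the equivalent constraint

    \tfrac{1}{k}\, \psi(k\, c_i(x)) \ge 0, \qquad i = 1, \dots, m,

where \psi is smooth, strictly increasing, and concave with \psi(0) = 0 and \psi'(0) = 1, and alternates minimizing the Lagrangian of the rescaled problem,

    x^{k+1} \in \arg\min_x \Big\{ f(x) - \tfrac{1}{k} \sum_i \lambda_i^k\, \psi(k\, c_i(x)) \Big\},

with the multiplier update \lambda_i^{k+1} = \lambda_i^k\, \psi'(k\, c_i(x^{k+1})).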


Proximal Point Nonlinear Rescaling Method for Convex Optimization

Nonlinear rescaling (NR) methods alternate finding an unconstrained minimizer of the Lagrangian for the equivalent problem in the primal space (which is an infinite procedure) with a Lagrange multiplier update. We introduce and study a proximal point nonlinear rescaling (PPNR) method that preserves convergence and retains a linear convergence rate of the original NR method and at the same time d...
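
Since the excerpt stops short of the construction, the "proximal point" ingredient can be read generically (an assumption, not necessarily the paper's exact formulation): with d denoting the dual function generated by the NR scheme, the multiplier step is regularized as

    \lambda^{k+1} = \arg\max_\lambda \Big\{ d(\lambda) - \tfrac{1}{2k} \|\lambda - \lambda^k\|^2 \Big\},

which keeps consecutive multiplier iterates close together while, as the abstract states, preserving the linear convergence rate of the original NR method.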


A Modified Barrier-Augmented Lagrangian Method for Constrained Minimization

We present and analyze an interior-exterior augmented Lagrangian method for solving constrained optimization problems with both inequality and equality constraints. This method, the modified barrier-augmented Lagrangian (MBAL) method, is a combination of the modified barrier and the augmented Lagrangian methods. It is based on the MBAL function, which treats inequality constraints with a modifi...
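
A plausible shape for the MBAL function, assembled here from Polyak's modified barrier and the classical augmented Lagrangian rather than quoted from the paper: for \min f(x) subject to c_i(x) \ge 0, i = 1, \dots, m, and h_j(x) = 0, j = 1, \dots, p,

    M_k(x, \lambda, \mu) = f(x) - \tfrac{1}{k} \sum_i \lambda_i \ln(1 + k\, c_i(x)) + \sum_j \mu_j h_j(x) + \tfrac{k}{2} \sum_j h_j(x)^2,

with multiplier updates \lambda_i^{+} = \lambda_i / (1 + k\, c_i(x^{+})) for the inequalities and \mu^{+} = \mu + k\, h(x^{+}) for the equalities.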


Local Convergence of the Method of Multipliers for Variational and Optimization Problems under the Sole Noncriticality Assumption

We present a local convergence analysis of the method of multipliers for equality-constrained variational problems (in the special case of optimization, also called the augmented Lagrangian method) under the sole assumption that the dual starting point is close to a noncritical Lagrange multiplier (which is weaker than second-order sufficiency). Local superlinear convergence is established under ...
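
For context, in the equality-constrained setting \min f(x) subject to h(x) = 0 with Lagrangian L(x, \lambda) = f(x) + \langle \lambda, h(x) \rangle, noncriticality of a multiplier \bar\lambda at a stationary point \bar x can be stated (following the Izmailov-Solodov line of work; the paper's exact formulation may differ) as the implication

    \nabla h(\bar x)\, \xi = 0 \ \text{and}\ \nabla^2_{xx} L(\bar x, \bar\lambda)\, \xi \in \operatorname{im} \nabla h(\bar x)^{T} \ \Longrightarrow\ \xi = 0.

This is indeed weaker than the second-order sufficient condition: if \xi satisfies both premises, then \xi^T \nabla^2_{xx} L(\bar x, \bar\lambda)\, \xi = 0, which second-order sufficiency rules out for nonzero \xi.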




Publication date: 2016